
    3D Geometric Analysis of Tubular Objects based on Surface Normal Accumulation

    This paper proposes a simple and efficient method for the reconstruction and extraction of geometric parameters from 3D tubular objects. Our method constructs an image that accumulates surface normal information; peaks within this image are then located by tracking. Finally, these peak positions are optimized to lie precisely on the centerline of the tubular shape. The method is very versatile and can process various input data types, such as full or partial meshes acquired from 3D laser scans, 3D height maps, or discrete volumetric images. The proposed algorithm is simple to implement, has few parameters, and runs in linear time with respect to the number of surface faces. Since the extracted tube centerline is accurate, we are able to decompose the tube into rectilinear parts and torus-like parts. This is done with a new linear-time 3D torus detection algorithm, which follows the same principle as previous work on 2D circular arc recognition. Detailed experiments show the versatility, accuracy and robustness of our new method.
    Comment: in 18th International Conference on Image Analysis and Processing, Sep 2015, Genova, Italy. 201
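
    To make the accumulation step concrete, here is a minimal NumPy sketch of the general idea (a hypothetical toy, not the authors' implementation; the candidate radii, grid resolution, and the synthetic cylinder are assumptions): each surface point votes at positions stepped inward along its normal, so voxels near the tube axis collect the most votes.

    import numpy as np

    def accumulate_normals(points, normals, radii, grid_res=1.0):
        """points, normals: (N, 3) arrays; radii: candidate tube radii to test."""
        votes = []
        for r in radii:
            votes.append(points - r * normals)      # step inward along each outward normal
        votes = np.concatenate(votes, axis=0)
        # Quantise vote positions onto a regular grid and count hits per cell.
        cells = np.floor(votes / grid_res).astype(np.int64)
        cells -= cells.min(axis=0)                  # shift to non-negative indices
        accumulator = np.zeros(tuple(cells.max(axis=0) + 1), dtype=np.int32)
        np.add.at(accumulator, tuple(cells.T), 1)
        return accumulator                          # high counts ~ centerline cells

    # Toy input: points sampled on a cylinder of radius 2 aligned with the z-axis.
    theta = np.random.uniform(0, 2 * np.pi, 5000)
    z = np.random.uniform(0, 10, 5000)
    pts = np.stack([2 * np.cos(theta), 2 * np.sin(theta), z], axis=1)
    nrm = np.stack([np.cos(theta), np.sin(theta), np.zeros_like(z)], axis=1)
    acc = accumulate_normals(pts, nrm, radii=[1.5, 2.0, 2.5], grid_res=0.5)
    print(acc.max(), np.unravel_index(acc.argmax(), acc.shape))

    Peak tracking and the centerline optimisation described in the abstract would then operate on an accumulator of this kind.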

    Fast extraction of neuron morphologies from large-scale SBFSEM image stacks

    Neuron morphology is frequently used to classify cell types in the mammalian cortex. Apart from the shape of the soma and the axonal projections, morphological classification is largely defined by the dendrites of a neuron and their subcellular compartments, referred to as dendritic spines. The dimensions of a neuron's dendritic compartment, including its spines, are also a major determinant of the passive and active electrical excitability of dendrites. Furthermore, the dimensions of dendritic branches and spines change during postnatal development and possibly also in response to certain patterns of neuronal activity. Due to their small size, accurate quantitation of spine number and structure is difficult to achieve (Larkman, J Comp Neurol 306:332, 1991). Here we follow an analysis approach using high-resolution EM techniques. Serial block-face scanning electron microscopy (SBFSEM) enables automated imaging of large specimen volumes at high resolution. The large data sets generated by this technique make manual reconstruction of neuronal structure laborious. We therefore present NeuroStruct, a reconstruction environment developed for fast and automated analysis of large SBFSEM data sets containing individual stained neurons, using algorithms optimized for CPU and GPU hardware. NeuroStruct is based on 3D operators and integrates image information from image stacks of individual neurons filled with biocytin and stained with osmium tetroxide. The focus of the presented work is the reconstruction of dendritic branches with a detailed representation of spines. NeuroStruct delivers both a 3D surface model of the reconstructed structures and a 1D geometrical model corresponding to their skeleton. Both representations are a prerequisite for analysing morphological characteristics and for simulating signalling within a neuron in a way that captures the influence of spines.
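
    The two output representations mentioned above (a 3D surface model and a 1D skeleton) can be illustrated with generic off-the-shelf routines. The sketch below uses scikit-image on a synthetic binary volume standing in for a segmented dendrite; it is not the NeuroStruct pipeline, and the tube phantom and library choice are assumptions made for illustration.

    import numpy as np
    from skimage.measure import marching_cubes
    from skimage.morphology import skeletonize

    # Synthetic binary volume: a thin tube running through a 64^3 stack.
    vol = np.zeros((64, 64, 64), dtype=np.uint8)
    zz, yy, xx = np.mgrid[0:64, 0:64, 0:64]
    vol[(yy - 32) ** 2 + (xx - 32) ** 2 < 9] = 1

    # 3D surface model of the segmented structure (triangle mesh).
    verts, faces, normals, _ = marching_cubes(vol.astype(float), level=0.5)

    # 1D geometrical model: voxel skeleton (Lee's method is used for 3D input).
    skel = skeletonize(vol.astype(bool))

    print(f"surface: {len(verts)} vertices, {len(faces)} faces")
    print(f"skeleton voxels: {int(np.count_nonzero(skel))}")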

    Image analysis in light sheet fluorescence microscopy images of transgenic zebrafish vascular development

    The zebrafish has become an established model for studying vascular development and disease in vivo. However, although it is now possible to acquire high-resolution data with state-of-the-art fluorescence microscopy, such as light sheet microscopy, most data interpretation in pre-clinical neurovascular research relies on subjective visual judgement rather than objective quantification. We therefore describe the development of an image analysis workflow for the quantification and description of zebrafish neurovascular development. In this paper we focus on data acquisition by light sheet fluorescence microscopy, data properties, image pre-processing, and vasculature segmentation, and we propose future work to derive quantitative measures of zebrafish neurovascular development.
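
    As a concrete illustration of the pre-processing and segmentation stages mentioned above, the sketch below applies a generic tubular-structure (Sato vesselness) filter and a global threshold using SciPy and scikit-image. This is an assumed, simplified filter chain for illustration only, not the workflow proposed in the paper; the filter scales and the synthetic test volume are placeholders.

    import numpy as np
    from scipy.ndimage import median_filter
    from skimage.filters import sato, threshold_otsu

    def segment_vessels(stack, sigmas=(1, 2, 4)):
        """stack: 3D light sheet volume (z, y, x) as a float array."""
        denoised = median_filter(stack, size=3)            # simple noise suppression
        vesselness = sato(denoised, sigmas=sigmas,         # enhance bright tubular structures
                          black_ridges=False)
        return vesselness > threshold_otsu(vesselness)     # global threshold -> binary mask

    # Toy example: one bright vessel-like tube in a noisy volume.
    vol = np.zeros((32, 64, 64), dtype=float)
    vol[:, 30:34, 30:34] = 1.0
    vol += 0.1 * np.random.randn(*vol.shape)
    print(int(segment_vessels(vol).sum()), "voxels labelled as vessel")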

    From Projections to the 3D Analysis of the Regenerated Tissue

    Computational imaging techniques such as X-ray computed tomography (CT) rely on a significant amount of computing. The acquired tomographic projections are digitally processed to reconstruct the final images of interest. This process is generically called reconstruction, and it includes additional steps carried out before or after the execution of the actual reconstruction algorithm. Most of these steps aim at improving image quality, mainly in terms of artifact compensation and noise reduction. The reconstructed images are then digitally analyzed to derive quantitative data and to support qualitative visual interpretation. This part involves computational approaches that fall under the generic term image segmentation. Pre- and post-segmentation image processing is often required to improve the final quantification and to extract reliable data from a CT dataset. This chapter presents an overview of the fundamentals of reconstruction and segmentation for the 3D analysis of high-resolution X-ray CT data. Better knowledge of artifacts and reconstruction issues avoids misinterpretation of the images. Similarly, more insight into the limitations of image segmentation and quantification helps in assessing the reliability of the derived numerical values. A deeper understanding of these elements is therefore beneficial for optimizing the whole workflow that starts from sample preparation and leads to CT-based scientific results.
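
    The two stages described in this chapter, reconstruction followed by segmentation, can be sketched end to end on a 2D toy example. The snippet below simulates projections of scikit-image's Shepp-Logan phantom, reconstructs them with filtered back projection, and applies a simple Otsu threshold as a stand-in for segmentation; it illustrates the generic workflow only, not a specific high-resolution CT pipeline, and the angle sampling and filter choice are assumptions.

    import numpy as np
    from skimage.data import shepp_logan_phantom
    from skimage.transform import radon, iradon, rescale
    from skimage.filters import threshold_otsu

    image = rescale(shepp_logan_phantom(), 0.5)            # ground-truth test object
    angles = np.linspace(0.0, 180.0, 180, endpoint=False)

    sinogram = radon(image, theta=angles)                  # simulated tomographic projections
    recon = iradon(sinogram, theta=angles,                 # filtered back projection
                   filter_name="ramp")

    mask = recon > threshold_otsu(recon)                   # simple global segmentation
    print("mean reconstruction error:", float(np.abs(recon - image).mean()))
    print("segmented pixels:", int(mask.sum()))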